Ionic Liquids (ILs) provide a promising solution for CO$_2$ capture and storage to mitigate global warming. However, identifying and designing high-capacity ILs within the vast chemical space requires expensive and exhaustive simulations and experiments. Machine learning (ML) can accelerate the search for desirable ionic molecules through accurate and efficient property predictions in a data-driven manner. However, existing descriptors and ML models for ionic molecules adapt inefficiently to molecular graph structure. Besides, few works have investigated the explainability of ML models to help understand the learned features that can guide the design of efficient ionic molecules. In this work, we develop both fingerprint-based ML models and Graph Neural Networks (GNNs) to predict CO$_2$ absorption in ILs. Fingerprints operate on the graph structure only at the feature-extraction stage, while GNNs handle the molecular structure directly in both the feature-extraction and prediction stages. We show that our method outperforms previous ML models, reaching a high accuracy (MAE of 0.0137, $R^2$ of 0.9884). Furthermore, we take advantage of the GNNs' feature representations and develop a substructure-based explanation method that provides insight into how each chemical fragment within IL molecules contributes to the CO$_2$ absorption prediction of ML models. We also show that our explanation results agree with ground truth from the theoretical reaction mechanism of CO$_2$ absorption in ILs, which can inform the design of novel and efficient functional ILs in the future.
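As a quick reference for the accuracy figures above, the two reported regression metrics (MAE and $R^2$) can be computed from scratch; the toy prediction values below are illustrative, not the paper's data.

```python
# Minimal sketch (not the paper's code): the two regression metrics
# reported above, computed for a toy predicted-vs-true CO2 absorption series.

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

y_true = [0.30, 0.45, 0.52, 0.61]  # toy CO2 absorption values
y_pred = [0.31, 0.44, 0.55, 0.60]
print(round(mae(y_true, y_pred), 4))  # → 0.015
print(round(r2(y_true, y_pred), 4))   # → 0.9767
```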
Flocking control is a significant problem in multi-agent systems such as multi-agent unmanned aerial vehicles and multi-agent autonomous underwater vehicles, as it enhances the cooperation and safety of agents. In contrast to traditional methods, multi-agent reinforcement learning (MARL) solves the flocking control problem more flexibly. However, MARL-based methods suffer from low sample efficiency, since they need to collect a large amount of experience from interactions between agents and the environment. We propose a novel method, Pretraining With Demonstrations for MARL (PWD-MARL), which can leverage non-expert demonstrations collected in advance with traditional methods to pretrain agents. During pretraining, agents learn policies from demonstrations simultaneously through MARL and behavior cloning, which prevents overfitting to the demonstrations. By pretraining on non-expert demonstrations, PWD-MARL gives online MARL a warm start and improves its sample efficiency. Experiments show that PWD-MARL improves sample efficiency and policy performance in the flocking control problem, even with poor or few demonstrations.
Flocking control is a challenging problem in which agents need to reach a target position while maintaining the flock and avoiding collisions with obstacles and with other agents in the environment. Multi-agent reinforcement learning (MARL) has achieved promising performance in flocking control. However, methods based on traditional reinforcement learning require a large number of interactions between agents and the environment. This paper proposes a Sub-optimal Policy Aided Multi-Agent Reinforcement Learning algorithm (SPA-MARL) to boost sample efficiency. SPA-MARL directly leverages a prior policy, which can be manually designed or obtained with non-learning methods, to aid agents in learning, and the performance of this policy may be sub-optimal. SPA-MARL recognizes the performance gap between the sub-optimal policy and itself, and then imitates the sub-optimal policy when it performs better. We use SPA-MARL to solve the flocking control problem. A traditional control method based on artificial potential fields is used to generate the sub-optimal policy. Experiments show that SPA-MARL can speed up the training process and outperforms both the MARL baselines and the sub-optimal policy it uses.
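A minimal sketch of the kind of artificial-potential-field controller used above to generate the sub-optimal policy: attraction toward the goal plus repulsion from nearby neighbors. The gains and safety distance are illustrative assumptions, not values from the paper.

```python
import math

def apf_velocity(pos, goal, neighbors, k_att=1.0, k_rep=0.5, safe_dist=1.0):
    """2-D velocity command: attraction to the goal plus repulsion from
    any neighbor closer than safe_dist. All gains are toy values."""
    vx = k_att * (goal[0] - pos[0])
    vy = k_att * (goal[1] - pos[1])
    for nx, ny in neighbors:
        dx, dy = pos[0] - nx, pos[1] - ny
        d = math.hypot(dx, dy)
        if 0.0 < d < safe_dist:
            # Repulsion grows rapidly as the neighbor gets closer.
            scale = k_rep * (1.0 / d - 1.0 / safe_dist) / d ** 2
            vx += scale * dx
            vy += scale * dy
    return vx, vy

# With no neighbors the agent heads straight for the goal.
print(apf_velocity((0.0, 0.0), (1.0, 0.0), []))  # → (1.0, 0.0)
```

A policy like this is cheap to evaluate, which is what makes it usable as the imitation target during training.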
In practical applications of restoring low-resolution grayscale images, we generally need to run three separate processes of image colorization, super-resolution, and downsampling for the target device. However, this pipeline is redundant and inefficient for these independent processes, which could share some internal features. Therefore, we propose an efficient paradigm to perform Simultaneous Colorization and Super-resolution (SCS) and present an end-to-end SCSNet to achieve this goal. The proposed method consists of two parts: a colorization branch for learning color information, which employs the proposed plug-and-play \emph{Pyramid Valve Cross Attention} (PVCAttn) module to aggregate feature maps between source and reference images; and a super-resolution branch that integrates color and texture information to predict target images, which uses the designed \emph{Continuous Pixel Mapping} (CPM) module to predict high-resolution images at continuous magnifications. Moreover, our SCSNet supports both automatic and referential modes, which is more flexible for practical applications. Abundant experiments demonstrate the superiority of our method for generating authentic images over state-of-the-art methods, e.g., average decreases of 1.8$\downarrow$ and 5.1$\downarrow$ compared with the best scores of current methods in automatic and referential modes, respectively, while owning fewer parameters (more than $\times$2$\downarrow$) and faster running speed (more than $\times$3$\uparrow$).
Off-policy evaluation (OPE) is to evaluate a target policy with data generated by other policies. Most previous OPE methods focus on precisely estimating the true performance of a policy. We observe that in many applications, (1) the final goal of OPE is to compare two or multiple candidate policies and choose a good one, which is a much simpler task than precisely evaluating their true performance; and (2) there are usually multiple policies that have already been deployed to serve users in real-world systems, so the true performance of these policies can be known. Inspired by these two observations, in this work we study a new problem, supervised off-policy ranking (SOPR), which aims to rank a set of target policies based on supervised learning, by leveraging off-policy data and policies with known performance. We propose a method to solve SOPR, which learns a policy scoring model by minimizing a ranking loss over the training policies, rather than estimating their precise performance. The scoring model in our method is a hierarchical Transformer-based model that maps a set of state-action pairs to a score, where the states of each pair come from the off-policy data and the actions are taken by a target policy on these states in an offline manner. Extensive experiments on public datasets show that our method outperforms baseline methods in terms of rank correlation, regret value, and stability. Our code is publicly available on GitHub.
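A minimal sketch of a pairwise hinge ranking loss of the kind SOPR minimizes over training policies with known performance; the function name and margin are our illustrative assumptions, not the paper's exact objective.

```python
# Illustrative sketch (not the paper's code): for every pair of training
# policies where i truly outperforms j, penalize the scoring model unless
# score_i exceeds score_j by at least a margin.

def pairwise_ranking_loss(scores, perfs, margin=0.1):
    """scores: model outputs per policy; perfs: known true performance."""
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if perfs[i] > perfs[j]:
                loss += max(0.0, margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)
```

If the model already ranks the policies in the correct order with enough separation, the loss is zero; misordered pairs contribute proportionally to how badly they are inverted.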
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features within a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features; second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at both the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarked on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on few-shot object detection. Code and models will be available.
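The mask-based class-center idea can be sketched in a few lines: average the per-pixel support features under the binary support mask to obtain a class center. This is our own plain-Python simplification with invented toy features, not the released RefT module.

```python
# Illustrative sketch: a "dynamic class center" as the masked average of
# per-pixel feature vectors, which could then re-weight query features
# by similarity to the center.

def masked_center(features, mask):
    """features: list of per-pixel feature vectors; mask: 0/1 per pixel.
    Returns the mean feature over pixels where mask == 1."""
    total = sum(mask)
    dim = len(features[0])
    center = [0.0] * dim
    for feat, m in zip(features, mask):
        if m:
            for d in range(dim):
                center[d] += feat[d] / total
    return center

# Toy 3-pixel feature map; only pixels 0 and 2 belong to the object.
print(masked_center([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], [1, 0, 1]))  # → [3.0, 4.0]
```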
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though instantiations share the same framework. Motivated by this phenomenon, we deduce a simple yet efficient modern \textbf{I}nverted \textbf{R}esidual \textbf{M}obile \textbf{B}lock (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase \textbf{E}fficient \textbf{MO}del (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods, \eg, our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing \textbf{SoTA} CNN-/Transformer-based models, while trading off model accuracy and efficiency well.
The development of social media user stance detection and bot detection methods relies heavily on large-scale, high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, suppressing graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extract the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we perform a thorough evaluation of MGTAB and other public datasets. Our experiments find that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
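The information-gain criterion used above to pick the top user property features can be sketched for the discrete case; this is the generic textbook computation, not MGTAB's released code, and the toy feature/label arrays are invented.

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    out = 0.0
    for c in set(labels):
        p = labels.count(c) / n
        out -= p * math.log2(p)
    return out

def information_gain(feature, labels):
    """IG = H(labels) - sum_v p(feature==v) * H(labels | feature==v)."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        gain -= len(subset) / n * entropy(subset)
    return gain

# A feature that perfectly predicts the bot/human label has IG = 1 bit;
# an uninformative one has IG = 0.
print(information_gain([1, 1, 0, 0], [1, 1, 0, 0]))  # → 1.0
```

Ranking every candidate feature by this score and keeping the top 20 reproduces the selection step described above.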
Benefiting from its intrinsic supervision information exploitation capability, contrastive learning has recently achieved promising performance in the field of deep graph clustering. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit the performance of existing algorithms. 1) The quality of positive samples depends heavily on carefully designed data augmentations, while inappropriate data augmentations easily lead to semantic drift and indiscriminative positive samples. 2) The constructed negative samples are unreliable because they ignore important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) that mines the intrinsic supervision information in high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbations, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct positive samples from the same high-confidence cluster across the two views. Moreover, to construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function that pulls together samples from the same cluster while pushing away those from other clusters, by maximizing and minimizing the cross-view cosine similarity between positive and negative samples, respectively. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with existing state-of-the-art algorithms.
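The cross-view cosine-similarity objective can be sketched in plain Python; this is an illustrative simplification of the described loss (our own function names and toy vectors), not the CCGC release.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def contrastive_loss(anchor, positive, negative_centers):
    """Lower when the anchor aligns with its positive (same high-confidence
    cluster, other view) and is dissimilar to other clusters' centers."""
    loss = 1.0 - cosine(anchor, positive)
    for center in negative_centers:
        loss += max(0.0, cosine(anchor, center))
    return loss

# Perfectly aligned positive pair, orthogonal negative center: zero loss.
print(contrastive_loss([1.0, 0.0], [1.0, 0.0], [[0.0, 1.0]]))  # → 0.0
```

Treating cluster centers, rather than arbitrary other nodes, as negatives is what makes the negative pairs semantically meaningful in the scheme above.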
In robust Markov decision processes (MDPs), the uncertainty in the transition kernel is addressed by finding a policy that optimizes the worst-case performance over an uncertainty set of MDPs. While much of the literature has focused on discounted MDPs, robust average-reward MDPs remain largely unexplored. In this paper, we focus on robust average-reward MDPs, where the goal is to find a policy that optimizes the worst-case average reward over an uncertainty set. We first take an approach that approximates average-reward MDPs using discounted MDPs. We prove that the robust discounted value function converges to the robust average-reward as the discount factor $\gamma$ goes to $1$, and moreover, when $\gamma$ is large, any optimal policy of the robust discounted MDP is also an optimal policy of the robust average-reward. We further design a robust dynamic programming approach, and theoretically characterize its convergence to the optimum. Then, we investigate robust average-reward MDPs directly without using discounted MDPs as an intermediate step. We derive the robust Bellman equation for robust average-reward MDPs, prove that the optimal policy can be derived from its solution, and further design a robust relative value iteration algorithm that provably finds its solution, or equivalently, the optimal robust policy.
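A toy sketch of robust relative value iteration over a finite uncertainty set: the operator form follows the robust Bellman equation described above (the adversary picks the worst kernel per state-action pair, and subtracting a reference state's value keeps iterates bounded), while the 2-state MDP and the finite kernel sets are our own illustration.

```python
# Illustrative sketch, not the paper's algorithm in full generality.

def robust_rvi(rewards, kernel_sets, iters=200):
    """rewards[s][a]: reward; kernel_sets[s][a]: list of candidate
    transition distributions over next states (the uncertainty set)."""
    n_states = len(rewards)
    h = [0.0] * n_states   # relative value function
    gain = 0.0             # estimate of the worst-case average reward
    for _ in range(iters):
        new_h = []
        for s in range(n_states):
            new_h.append(max(
                rewards[s][a] + min(
                    sum(p * h[s2] for s2, p in enumerate(kernel))
                    for kernel in kernel_sets[s][a]
                )
                for a in range(len(rewards[s]))
            ))
        gain = new_h[0]    # normalize against reference state 0
        h = [v - gain for v in new_h]
    return gain, h

# State 0 pays reward 1, state 1 pays 0; in state 0 the adversary chooses
# between staying put and a 50/50 mix, and picks whichever is worse.
rewards = [[1.0], [0.0]]
kernel_sets = [
    [[[1.0, 0.0], [0.5, 0.5]]],  # state 0, action 0: two candidate kernels
    [[[0.5, 0.5]]],              # state 1, action 0
]
gain, h = robust_rvi(rewards, kernel_sets)
print(round(gain, 4))            # → 0.5
```

Here the adversary prefers the mixing kernel (it leaks probability into the zero-reward state), so the worst-case average reward settles at 0.5 rather than 1.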